[Quantization] Support more than one quant-compressor #415

Merged
dsikka merged 12 commits into main from support_multi_compressor on Aug 14, 2025

Conversation

@dsikka (Collaborator) commented on Aug 7, 2025

Summary

  • Allow more than one compressor to be applied to a given model
  • Updates ModelCompressor.quantization_compressor to be a dictionary, so that more than one quantization compressor can be supported
  • Adds mixed-precision as a new CompressionFormat; if more than one format is found within the model, mixed-precision is set as the model's global format in its config.json
  • Adds a format field to the QuantizationScheme and uses this per-module field to fetch the appropriate compressor when compressing the model (a sketch follows this list)
  • Note: this is not supported for ModelCompressor.compress and ModelCompressor.decompress, which essentially only support global formats; only compress_model and decompress_model currently support this functionality
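
A minimal sketch of the per-module lookup described above, assuming a quantization_compressor dictionary keyed by format string and a quantization_scheme attribute with a format field on each quantized module (names and signature are illustrative, not the exact API):

from typing import Dict, Optional

import torch


def get_compressor_for_module(
    module: torch.nn.Module,
    quantization_compressor: Dict[str, object],  # format string -> compressor instance
) -> Optional[object]:
    """Return the compressor matching the module's per-module format, if any."""
    scheme = getattr(module, "quantization_scheme", None)
    if scheme is None or getattr(scheme, "format", None) is None:
        # not quantized, or no per-module format: fall back to global/default handling
        return None
    return quantization_compressor[scheme.format]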

Testing:

  • nightly passes

Next Steps:

  • Decompression for mixed-precision is not yet supported; we will eventually need it to run lm-evals / HF forward passes

Example Updates

  • For an NVFP4 + FP8 model, the recipe could look like this (a usage sketch follows the recipe):
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: channel
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: token
                        dynamic: true
                        symmetric: true
                    targets: ["re:.*mlp.down_proj.*"]
                group_1:
                    weights:
                        num_bits: 4
                        type: float
                        strategy: tensor_group
                        dynamic: false
                        symmetric: true
                        group_size: 16
                    input_activations:
                        num_bits: 4
                        type: float
                        strategy: tensor_group
                        dynamic: local
                        symmetric: true
                        group_size: 16
                    targets: ["re:.*mlp.gate_proj.*", "re:.*mlp.up_proj.*", "re:.*self_attn.k_proj.*", "re:.*self_attn.o_proj.*", "re:.*self_attn.q_proj.*", "re:.*self_attn.v_proj.*"]
"""

New config:

{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 5632,
  "max_position_embeddings": 2048,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 22,
  "num_key_value_heads": 4,
  "pretraining_tp": 1,
  "quantization_config": {
    "config_groups": {
      "group_0": {
        "format": "nvfp4-pack-quantized",
        "input_activations": {
          "actorder": null,
          "block_structure": null,
          "dynamic": "local",
          "group_size": 16,
          "num_bits": 4,
          "observer": "minmax",
          "observer_kwargs": {},
          "strategy": "tensor_group",
          "symmetric": true,
          "type": "float"
        },
        "output_activations": null,
        "targets": [
          "re:.*mlp.gate_proj.*",
          "re:.*mlp.up_proj.*",
          "re:.*self_attn.k_proj.*",
          "re:.*self_attn.o_proj.*",
          "re:.*self_attn.q_proj.*",
          "re:.*self_attn.v_proj.*"
        ],
        "weights": {
          "actorder": null,
          "block_structure": null,
          "dynamic": false,
          "group_size": 16,
          "num_bits": 4,
          "observer": "minmax",
          "observer_kwargs": {},
          "strategy": "tensor_group",
          "symmetric": true,
          "type": "float"
        }
      },
      "group_1": {
        "format": "float-quantized",
        "input_activations": {
          "actorder": null,
          "block_structure": null,
          "dynamic": true,
          "group_size": null,
          "num_bits": 8,
          "observer": null,
          "observer_kwargs": {},
          "strategy": "token",
          "symmetric": true,
          "type": "float"
        },
        "output_activations": null,
        "targets": [
          "re:.*mlp.down_proj.*"
        ],
        "weights": {
          "actorder": null,
          "block_structure": null,
          "dynamic": false,
          "group_size": null,
          "num_bits": 8,
          "observer": "minmax",
          "observer_kwargs": {},
          "strategy": "channel",
          "symmetric": true,
          "type": "float"
        }
      }
    },
    "format": "mixed-precision",
    "global_compression_ratio": null,
    "ignore": [
      "lm_head"
    ],
    "kv_cache_scheme": null,
    "quant_method": "compressed-tensors",
    "quantization_status": "compressed"
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.55.0",
  "use_cache": true,
  "vocab_size": 32000

@shanjiaz (Contributor) previously approved these changes on Aug 11, 2025 and left a comment:

🎉 LGTM!

@rahul-tuli (Member) left a comment:

Nice feature. I agree with @kylesayrs's recommendation, plus updating docstrings and adding a test specifically for mixed-precision compression/decompression.

@dsikka dsikka force-pushed the support_multi_compressor branch from cd324dd to 8b5d4c9 on August 12, 2025 19:28
@kylesayrs (Contributor) commented on Aug 12, 2025

Seems like there are 3 sources of truth for quantization format

infer_per_module_quantization_format, QuantizationScheme.format, and _fetch_unique_quantization_formats (which also uses self.quantization_config.format).

It'd be nice if QuantizationScheme.format were the source of truth on a per-module basis, and a simple get_model_compression_format were the source of truth on a per-model basis. Ideally this get_model_compression_format function could return "mixed" if it's only used for serialization. But if we really need to eagerly* initialize compressors, this function could return a list of formats.

from typing import Set

import torch
from compressed_tensors.config import CompressionFormat
from compressed_tensors.utils import getattr_chain

def get_model_compression_format(model: torch.nn.Module) -> Set[CompressionFormat]:
    return set(
        getattr_chain(module, "quantization_scheme.format", CompressionFormat.dense)
        for module in model.modules()
    )
  * "eagerly" refers to initializing compressors at model-compressor init time, rather than at compression time and storing them in a default dict

@dsikka (Collaborator, Author) commented on Aug 12, 2025

> Seems like there are 3 sources of truth for quantization format ... It'd be nice if QuantizationScheme.format were the source of truth on a per-module basis, and a simple get_model_compression_format were the source of truth on a per-model basis.

infer_per_module_quantization_format is called to set QuantizationScheme.format, so there is no separate source of truth. This is the same functionality we previously used to determine the global format. The format is tied directly to how the models work in vLLM and is not something we expect users to know about, except when they want to override the global compression format. We can move the inference logic to compressed-tensors if we want to refactor the compressor lifecycle; it makes more sense there anyway.

We still support overriding the global compression format, but this is not a common pathway, which is why the per-module case was not part of this PR.

Ideally, we could also update our preset schemes to include the compression formats, but again, that is not what this PR targets since it is not our typical user pathway.

I agree we can avoid calling _fetch_unique_quantization_formats when going through from_pretrained_model, but our compressors support multiple entry points that require format resolution.

@kylesayrs (Contributor) left a comment:

Has this been tested with model reloading? I see a couple potential issues there.

In the case where we want to load a model that has mixed compression:

  1. from_pretrained_model and from_compression_config both set quantization_config.format to be "mixed". If quantization_config.format is set, _fetch_unique_quantization_formats will not be called
  2. Since the model_compressor assumes that module formats have previously been set by infer_per_module_quantization_format and this function only, will this work for pathways in which we compress models without calling infer_per_module_quantization_format first?

There seems to be implicit coupling of infer_per_module_quantization_format, ModelCompressor.from_pretrained_model and ModelCompressor.compress/decompress, where infer_per_module_quantization_format must be called before the others. If we're going to do this, we should raise errors if a module has scheme.format = None.

@kylesayrs (Contributor) previously approved these changes on Aug 13, 2025 and left a comment:

Approving with the following list of follow-ups

Follow-ups directly in scope of this PR:

  1. Consider inferring the compression format on a per-module basis. This enables users to manually specify formats (useful for debugging at the least) and, more importantly, decouples compression from requiring that infer_quantization_format be called beforehand.
def get_module_format(module):
    qscheme = module.quantization_scheme
    sscheme = module.sparsity_scheme  # or from a map

    inferred_format = infer_compression_format(qscheme, sscheme)
    if qscheme.format is not None and qscheme.format != inferred_format:
        ...  # warn that the explicitly set format differs from the inferred one
    ...

We can still use a global override by passing it to this function.

  2. Consider only inferring the format label at config-serialization time, rather than earlier. This avoids having to pass and parse the format in multiple places, and prevents user or model-loading code from accidentally passing "mixed" as a format.
def update_config(self, model):
    config[QUANTIZATION_CONFIG_NAME].format = get_model_format(model)

def get_model_format(model):
    return set(get_module_format(module) for module in model.modules())

Follow-ups that are related but might make implementation easier:

  1. Consider refactoring compressors into functions, not objects
def compress_model(model):
    for name, module in model.named_modules():
        format = get_compression_format(module)
        module = compress_module(module, format)
        set_module(model, name, module)

def compress_module(module, format):
    if format == CompressionFormat.dense:
        return module
    if format == CompressionFormat.Sparse24:
        return Sparse24Compressor.compress_module(module)
    ...
  2. Consider refactoring format to not be nullable. This reduces the required parsing logic and tightens type hinting.

@brian-dellabetta (Contributor) left a comment:

I commented on / marked as resolved the thread on passing in quantization_config; one last comment on naming quantization_compressor.

@dsikka dsikka enabled auto-merge (squash) August 14, 2025 21:38
@brian-dellabetta (Contributor) left a comment:

🚢

@dsikka dsikka merged commit fac7e4a into main Aug 14, 2025
1 check passed
@dsikka dsikka deleted the support_multi_compressor branch August 14, 2025 22:03
@jiqing-feng jiqing-feng mentioned this pull request Aug 15, 2025
dsikka added a commit to vllm-project/llm-compressor that referenced this pull request Aug 15, 2025
SUMMARY:
- Requires neuralmagic/compressed-tensors#415
- Updates `infer_quantization_format` to be `infer_per_module_quantization_format` so that, instead of returning a global format, a per-module format is assigned to each module and used at compression time. All unique compression formats are returned (a sketch of this behavior follows).
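
A hedged sketch of the behavior described in this summary (not the actual implementation); infer_format is an assumed callable standing in for the real inference logic:

from typing import Callable, List

import torch


def assign_per_module_formats(
    model: torch.nn.Module,
    infer_format: Callable,  # assumed: maps a quantization scheme to a format string
) -> List[str]:
    """Assign a format to each quantized module's scheme and return the unique formats."""
    unique_formats = set()
    for module in model.modules():
        scheme = getattr(module, "quantization_scheme", None)
        if scheme is None:
            continue
        # keep an explicitly set per-module format, otherwise infer one
        scheme.format = scheme.format or infer_format(scheme)
        unique_formats.add(scheme.format)
    return sorted(unique_formats)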